
    Payoff Performance of Fictitious Play

    We investigate how well continuous-time fictitious play in two-player games performs in terms of average payoff, particularly compared to Nash equilibrium payoff. We show that in many games, fictitious play outperforms Nash equilibrium on average or even at all times, and moreover that any game is linearly equivalent to one in which this is the case. Conversely, we provide conditions under which Nash equilibrium payoff dominates fictitious play payoff. A key step in our analysis is to show that fictitious play dynamics asymptotically converges to the set of coarse correlated equilibria (a fact which is implicit in the literature).
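    For intuition, here is a minimal discrete-time sketch of fictitious play in a bimatrix game, in which each player best-responds to the opponent's empirical mixture of past plays and the realised average payoff is tracked. The paper itself studies the continuous-time dynamics; the payoff matrices and the Matching Pennies example below are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def fictitious_play(A, B, T=10_000, rng=None):
    """Discrete-time fictitious play for a bimatrix game (A, B).

    A[i, j]: payoff to player 1, B[i, j]: payoff to player 2, when
    player 1 plays row i and player 2 plays column j. Returns the
    empirical (time-averaged) strategies and the average realised payoff.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    counts1 = np.zeros(m)  # how often player 1 has played each row
    counts2 = np.zeros(n)  # how often player 2 has played each column
    counts1[rng.integers(m)] += 1  # arbitrary initial pure strategies
    counts2[rng.integers(n)] += 1
    avg_payoff = np.zeros(2)

    for t in range(1, T + 1):
        # Each player best-responds to the opponent's empirical mixture.
        emp1 = counts1 / counts1.sum()
        emp2 = counts2 / counts2.sum()
        i = int(np.argmax(A @ emp2))   # best response of player 1
        j = int(np.argmax(emp1 @ B))   # best response of player 2
        counts1[i] += 1
        counts2[j] += 1
        # Running average of the realised payoff pair.
        avg_payoff += (np.array([A[i, j], B[i, j]]) - avg_payoff) / t

    return counts1 / counts1.sum(), counts2 / counts2.sum(), avg_payoff

# Example: Matching Pennies (zero-sum); the empirical strategies
# approach the mixed Nash equilibrium (1/2, 1/2).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y, payoffs = fictitious_play(A, -A, T=50_000)
print(x, y, payoffs)
```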

    Topics arising from fictitious play dynamics

    In this thesis, we present a few different topics arising in the study of the learning dynamics called fictitious play. We investigate the combinatorial properties of this dynamical system describing the strategy sequences of the players, and in particular deduce a combinatorial classification of zero-sum games with three strategies per player. We further obtain results about the limit sets and asymptotic payoff performance of fictitious play as a learning algorithm. In order to study coexistence of regular (periodic and quasi-periodic) and chaotic behaviour in fictitious play and a related continuous, piecewise affine flow on the three-sphere, we look at its planar first return maps and investigate several model problems for such maps. We prove a non-recurrence result for non-self maps of regions in the plane, similar to Brouwer’s classical result for planar homeomorphisms. Finally, we consider a family of piecewise affine maps of the square, which is very similar to the first return maps of fictitious play, but simple enough for explicit calculations, and prove several results about its dynamics, particularly its invariant circles and regions.

    Increasing the Action Gap: New Operators for Reinforcement Learning

    This paper introduces new optimality-preserving operators on Q-functions. We first describe an operator for tabular representations, the consistent Bellman operator, which incorporates a notion of local policy consistency. We show that this local consistency leads to an increase in the action gap at each state; increasing this gap, we argue, mitigates the undesirable effects of approximation and estimation errors on the induced greedy policies. This operator can also be applied to discretized continuous space and time problems, and we provide empirical results evidencing superior performance in this context. Extending the idea of a locally consistent operator, we then derive sufficient conditions for an operator to preserve optimality, leading to a family of operators which includes our consistent Bellman operator. As corollaries we provide a proof of optimality for Baird's advantage learning algorithm and derive other gap-increasing operators with interesting properties. We conclude with an empirical study on 60 Atari 2600 games illustrating the strong potential of these new operators.
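    The tabular operators described in the abstract can be written down compactly. Below is a minimal sketch, assuming a tabular MDP given by reward array R[s, a] and transition kernel P[s, a, s'] (these names and the alpha parameter are illustrative assumptions, not the paper's notation): the standard Bellman optimality operator, the consistent Bellman operator, and Baird's advantage learning as a gap-increasing variant.

```python
import numpy as np

def bellman(Q, R, P, gamma):
    """Standard Bellman optimality operator on a tabular Q-function.

    R[s, a]: expected immediate reward; P[s, a, s']: transition kernel.
    """
    return R + gamma * P @ Q.max(axis=1)

def consistent_bellman(Q, R, P, gamma):
    """Consistent Bellman operator (tabular sketch): when the next state
    equals the current state, the usual max over next-state actions is
    replaced by the current action's value, which increases the action
    gap max_b Q(s, b) - Q(s, a) at each state."""
    V = Q.max(axis=1)                      # V(s) = max_b Q(s, b)
    S, A = Q.shape
    out = np.empty_like(Q)
    for s in range(S):
        for a in range(A):
            # E_{s'}[ max_b Q(s', b) - 1[s' = s] * (V(s) - Q(s, a)) ]
            expected = P[s, a] @ V - P[s, a, s] * (V[s] - Q[s, a])
            out[s, a] = R[s, a] + gamma * expected
    return out

def advantage_learning(Q, R, P, gamma, alpha=0.5):
    """Baird's advantage learning, another gap-increasing operator:
    subtract alpha times the action gap from the standard backup."""
    TQ = bellman(Q, R, P, gamma)
    return TQ - alpha * (Q.max(axis=1, keepdims=True) - Q)
```

    Iterating either gap-increasing operator to a fixed point yields a Q-function whose greedy policy is optimal, while the widened action gap makes that greedy choice less sensitive to approximation and estimation errors.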